Topological data analysis (TDA) is a set of tools from data science and mathematics that is beginning to make waves in environmental science. In this work, we provide an intuitive and understandable introduction to a TDA tool that is particularly useful for analyzing images, namely persistent homology. We briefly discuss the theoretical background, but focus primarily on understanding the output of this tool and on what information it can glean. To this end, we frame our discussion around a guiding example: classifying the sugar, fish, flower, and gravel dataset studied by Rasp et al. 2020 (arXiv:1906.01906). We demonstrate how a simple machine learning algorithm can be used to obtain good results, and we explore in detail how that behavior can be explained in terms of image-level features. One of persistent homology's core strengths is its interpretability, so throughout this paper we discuss not only the patterns we find but also why, given what we know about the theory of persistent homology, those results are to be expected. Our goal is that readers of this paper will come away with a better understanding of TDA and persistent homology, will be able to identify problems and datasets of their own for which persistent homology could be helpful, and will gain an understanding of the results obtained by applying the included GitHub example code.
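To make the kind of output persistent homology produces concrete, here is a minimal sketch of computing a persistence diagram for a grayscale image via a cubical complex. It assumes the `gudhi` library and uses a random array as a stand-in image; it illustrates the general workflow, not the paper's GitHub code.

```python
# Minimal sketch: persistent homology of a grayscale image via a cubical
# complex, filtered by pixel intensity (using the gudhi library).
import numpy as np
import gudhi

# Hypothetical stand-in for one image; in practice this would be an
# image from the sugar/fish/flower/gravel dataset.
image = np.random.rand(64, 64)

complex_ = gudhi.CubicalComplex(top_dimensional_cells=image)
diagram = complex_.persistence()

# Each entry is (dimension, (birth, death)): H0 features are connected
# components, H1 features are loops/holes in the intensity sublevel sets.
for dim, (birth, death) in diagram:
    print(f"H{dim}: born {birth:.3f}, dies {death:.3f}")
```

The (birth, death) pairs are exactly the "persistence" the abstract refers to: long-lived features tend to reflect genuine image structure, short-lived ones noise.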
This study examines issues of algorithmic fairness arising in the United States Internal Revenue Service's (IRS) system for selecting tax returns for audit. While the field of algorithmic fairness has developed primarily around notions of treating like individuals alike, we instead explore the concept of vertical equity -- appropriately accounting for relevant differences across individuals -- which is a central component of fairness in many public policy settings. Applied to the design of the U.S. individual income tax system, vertical equity relates to the fair allocation of tax and enforcement burdens across taxpayers of different income levels. Through a unique collaboration with the Department of the Treasury and the IRS, we use anonymized individual taxpayer microdata, risk-selected audits, and random audits from 2010-14 to study vertical equity in tax administration. In particular, we assess how the use of modern machine learning methods for selecting audits may affect vertical equity. First, we show how more flexible machine learning (classification) methods -- as opposed to simpler models -- shift the audit burden from high-income toward middle-income taxpayers. Second, we show that while existing algorithmic fairness techniques can mitigate some disparities across income levels, they can come at a substantial cost to performance. Third, we show that the choice of whether to treat the risk of underreporting as a classification or a regression problem is highly consequential: switching from classification to regression models for predicting underreporting shifts audits substantially toward high-income individuals while increasing revenue. Last, we explore the role of differential audit cost in shaping audit allocations; we show that a narrow focus on return on investment can undermine vertical equity. Our results have implications for the design of algorithmic tools across the public sector.
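As a toy illustration of the classification-versus-regression point, the sketch below trains both model types on synthetic data in which the chance of underreporting and its magnitude are driven by different features, then compares which cases each model would select under a fixed audit budget. Every data and modeling choice here is an assumption made for illustration; this is not the Treasury/IRS data or the paper's models.

```python
# Toy illustration: audit selection by a classifier (probability of any
# underreporting) vs. a regressor (predicted underreported amount).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))                # synthetic return features
p = 1 / (1 + np.exp(-2 * X[:, 0]))             # chance of any underreporting
underreported = rng.binomial(1, p)
amount = underreported * np.exp(2 + X[:, 1])   # magnitude driven by a different feature

clf = GradientBoostingClassifier().fit(X, underreported)
reg = GradientBoostingRegressor().fit(X, amount)

budget = 200                                   # audits we can afford
by_clf = np.argsort(-clf.predict_proba(X)[:, 1])[:budget]
by_reg = np.argsort(-reg.predict(X))[:budget]

overlap = len(set(by_clf) & set(by_reg)) / budget
print(f"Overlap between the two audit selections: {overlap:.0%}")
```

Because the classifier ranks by who is likely to underreport at all, while the regressor ranks by expected dollars, the two budgets land on noticeably different sets of taxpayers.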
Recent work has shown that models trained to the same objective, achieving similar measures of accuracy on consistent test data, may nonetheless behave very differently on individual predictions. This inconsistency is undesirable in high-stakes settings, such as medical diagnosis and finance. We show that this inconsistent behavior extends beyond predictions to feature attributions, which likewise has negative implications for the intelligibility of a model and for one's ability to find recourse for its subjects. We then introduce selective ensembles to mitigate such inconsistencies, by applying hypothesis testing to the predictions of a set of models trained using randomly-selected starting conditions; importantly, selective ensembles can abstain in cases where a consistent outcome cannot be achieved at a specified confidence level. We prove that the prediction disagreement between selective ensembles is bounded, and empirically demonstrate that selective ensembles achieve consistent predictions and feature attributions while maintaining low abstention rates. On several benchmark datasets, selective ensembles reach zero inconsistently predicted points, with abstention rates as low as 1.5%.
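A minimal sketch of the abstention mechanism for binary classification follows: given the votes of models trained from different random seeds, predict the majority label only when a one-sided binomial test rejects the hypothesis that the models are split 50/50 at the chosen significance level. This is an illustration of the idea, assuming `scipy`, not the authors' implementation.

```python
# Selective-ensemble sketch: predict the majority label only if a binomial
# test rejects "the models are split 50/50"; otherwise abstain.
from scipy.stats import binomtest

ABSTAIN = -1

def selective_predict(votes, alpha=0.05):
    """votes: list of 0/1 labels, one per model trained from a different
    random seed; returns the majority label or ABSTAIN."""
    n = len(votes)
    k = sum(votes)                 # models voting for class 1
    majority = int(2 * k >= n)
    # Under the null hypothesis, each model votes for class 1 with prob. 0.5.
    p_value = binomtest(max(k, n - k), n, p=0.5, alternative="greater").pvalue
    return majority if p_value <= alpha else ABSTAIN

print(selective_predict([1] * 9 + [0]))        # 9/10 agree -> predicts 1
print(selective_predict([1] * 6 + [0] * 4))    # 6/10 agree -> -1 (abstain)
```

Abstaining on the near-tie is what bounds disagreement between independently trained selective ensembles.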
Dialogue systems in the form of chatbots and personal assistants are being increasingly integrated into people's lives. Modern dialogue systems may consider adopting anthropomorphic personas, mimicking societal demographic groups to appear more approachable and trustworthy to users. However, adopting a persona can result in the adoption of biases. In this paper, we present the first large-scale study on persona biases in dialogue systems, analyzing personas across different social classes, sexual orientations, races, and genders. We define persona biases as harmful differences in responses (e.g., varying levels of offensiveness, varying degrees of agreement with harmful statements) generated from adopting different demographic personas. Furthermore, we introduce an open-source framework, UnitPersonaBias, to explore persona biases in dialogue systems. By analyzing the Blender and DialoGPT dialogue systems, we observe that adopting personas can actually decrease harmful responses compared to not using any persona. Additionally, we find that persona choices can affect the degree of harm in generated responses and thus should be systematically evaluated before deployment. We also analyze how personas can result in different amounts of harm towards specific demographic groups.
As machine learning black boxes are increasingly being deployed in domains such as healthcare and criminal justice, there is growing emphasis on building tools and techniques for explaining these black boxes in an interpretable manner. Such explanations are being leveraged by domain experts to diagnose systematic errors and underlying biases of black boxes. In this paper, we demonstrate that post hoc explanation techniques that rely on input perturbations, such as LIME and SHAP, are not reliable. Specifically, we propose a novel scaffolding technique that effectively hides the biases of any given classifier by allowing an adversarial entity to craft an arbitrary desired explanation. Our approach can be used to scaffold any biased classifier in such a way that its predictions on the input data distribution still remain biased, but the post hoc explanations of the scaffolded classifier look innocuous. Using extensive evaluation with multiple real world datasets (including COMPAS), we demonstrate how extremely biased (racist) classifiers crafted by our framework can easily fool popular explanation techniques such as LIME and SHAP into generating innocuous explanations which do not reflect the underlying biases.
CCS Concepts: • Computing methodologies → Machine learning; Supervised learning by classification; • Human-centered computing → Interactive systems and tools.
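A stripped-down sketch of the scaffolding idea: train an off-distribution detector to distinguish genuine inputs from the synthetic perturbation points that LIME and SHAP generate, then route genuine inputs to the biased model and perturbation points to an innocuous one. The class and model names below are illustrative simplifications, not the paper's full construction.

```python
# Sketch of a scaffolded classifier: biased on the true data distribution,
# innocuous on the off-distribution points that perturbation-based
# explainers (LIME/SHAP) query.
import numpy as np

class ScaffoldedClassifier:
    def __init__(self, biased_model, innocuous_model, ood_detector):
        self.biased = biased_model        # e.g., relies on a sensitive feature
        self.innocuous = innocuous_model  # e.g., uses harmless features only
        self.ood = ood_detector           # any sklearn-style binary classifier

    def fit_detector(self, X_real, X_perturbed):
        # Learn to tell real inputs (label 1) from explainer-generated
        # perturbation samples (label 0).
        X = np.vstack([X_real, X_perturbed])
        y = np.hstack([np.ones(len(X_real)), np.zeros(len(X_perturbed))])
        self.ood.fit(X, y)

    def predict(self, X):
        is_real = self.ood.predict(X).astype(bool)
        out = np.empty(len(X), dtype=int)
        if is_real.any():                 # real inputs: biased predictions
            out[is_real] = self.biased.predict(X[is_real])
        if (~is_real).any():              # explainer queries: innocuous ones
            out[~is_real] = self.innocuous.predict(X[~is_real])
        return out
```

Because LIME and SHAP only ever see the innocuous model's behavior on their perturbed queries, the explanations they produce attribute the decisions to the harmless features.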
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
With the rise in high resolution remote sensing technologies there has been an explosion in the amount of data available for forest monitoring, and an accompanying growth in artificial intelligence applications to automatically derive forest properties of interest from these datasets. Many studies use their own data at small spatio-temporal scales, and demonstrate an application of an existing or adapted data science method for a particular task. This approach often involves intensive and time-consuming data collection and processing, but generates results restricted to specific ecosystems and sensor types. There is a lack of widespread acknowledgement of how the types and structures of data used affects performance and accuracy of analysis algorithms. To accelerate progress in the field more efficiently, benchmarking datasets upon which methods can be tested and compared are sorely needed. Here, we discuss how lack of standardisation impacts confidence in estimation of key forest properties, and how considerations of data collection need to be accounted for in assessing method performance. We present pragmatic requirements and considerations for the creation of rigorous, useful benchmarking datasets for forest monitoring applications, and discuss how tools from modern data science can improve use of existing data. We list a set of example large-scale datasets that could contribute to benchmarking, and present a vision for how community-driven, representative benchmarking initiatives could benefit the field.
The study aims to develop a wearable device to combat the onslaught of COVID-19 and, likewise, to enhance the regular face shield available in the market. Furthermore, it seeks to raise awareness of the health and safety protocols initiated by the government and its affiliates for the enforcement of social distancing, through the integration of computer vision algorithms. The wearable device is composed of various hardware and software components: a transparent polycarbonate face shield, a microprocessor, sensors, a camera, a thin-film-transistor on-screen display, jumper wires, a power bank, and the Python programming language. The algorithm incorporated in the study is object detection, a computer vision machine learning task. The front camera, with OpenCV, determines the distance of a person in front of the user. Utilizing TensorFlow, the system identifies and detects the target object in an image or live feed and obtains its bounding boxes. Estimating the distance from the camera to the target object requires the focal length of the lens. To get the focal length, multiply the pixel width by the known distance and divide it by the known width (Rosebrock, 2020). The deployment of unit testing ensures that the parameters are valid in terms of design and specifications.
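The triangle-similarity calculation described above fits in a few lines. The calibration numbers below are hypothetical, chosen only to show the arithmetic; in the actual device, the pixel width would come from the TensorFlow detector's bounding box.

```python
# Triangle-similarity distance estimation (per Rosebrock, 2020).
# Calibration: an object of known width, photographed at a known distance,
# appears a measured number of pixels wide. All values here are assumed.
KNOWN_WIDTH_CM = 45.0            # assumed shoulder width of a person
KNOWN_DISTANCE_CM = 200.0        # distance used for the calibration photo
CALIBRATION_PIXEL_WIDTH = 180.0  # measured pixel width in that photo

def focal_length(pixel_width, known_distance, known_width):
    # focal length = (pixel width * known distance) / known width
    return (pixel_width * known_distance) / known_width

def distance_to_object(focal, known_width, pixel_width):
    # invert the relation: distance = (known width * focal length) / pixel width
    return (known_width * focal) / pixel_width

F = focal_length(CALIBRATION_PIXEL_WIDTH, KNOWN_DISTANCE_CM, KNOWN_WIDTH_CM)
# Pixel width of a detected person's bounding box from the live feed:
print(distance_to_object(F, KNOWN_WIDTH_CM, pixel_width=120.0))  # -> 300.0 cm
```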
Despite many recent advancements in language modeling, state-of-the-art language models lack grounding in the real world and struggle with tasks involving complex reasoning. Meanwhile, advances in the symbolic reasoning capabilities of AI have led to systems that outperform humans in games like chess and Go (Silver et al., 2018). Chess commentary provides an interesting domain for bridging these two fields of research, as it requires reasoning over a complex board state and providing analyses in natural language. In this work we demonstrate how to combine symbolic reasoning engines with controllable language models to generate chess commentaries. We conduct experiments to demonstrate that our approach generates commentaries that are preferred by human judges over previous baselines.
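One way to sketch this coupling is to query a UCI chess engine for an evaluation and preferred move via `python-chess`, then pack those symbolic facts into the conditioning prompt for a language model. The Stockfish path and the final LM call below are placeholders; this is an assumed minimal pipeline, not the paper's system.

```python
# Sketch: symbolic engine analysis feeding a controllable language model.
import chess
import chess.engine

board = chess.Board()
for move in ("e4", "e5", "Nf3"):
    board.push_san(move)

# Path to a UCI engine binary is an assumption; adjust for your machine.
with chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish") as engine:
    info = engine.analyse(board, chess.engine.Limit(depth=15))
    score = info["score"].white()        # evaluation from White's perspective
    best = info.get("pv", [None])[0]     # engine's principal-variation move

prompt = (
    f"Position (FEN): {board.fen()}\n"
    f"Engine evaluation for White: {score}\n"
    f"Engine's preferred continuation: {best}\n"
    "Write a one-sentence commentary on the last move, Nf3."
)
# commentary = language_model.generate(prompt)  # hypothetical LM call
print(prompt)
```

The division of labor is the point: the engine supplies grounded analysis of the board state, and the language model is constrained to verbalize it rather than reason about chess on its own.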
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io